
    Neural Network Formalization

    To help the field of neural networks mature, a formalization and a solid foundation are essential. Moreover, to permit the introduction of formal proofs, an all-encompassing formal mathematical definition of a neural network is needed. Most neural networks, even biological ones, exhibit a layered structure. This publication shows that all neural networks can be represented as layered structures; this layeredness is therefore chosen as the basis for a formal neural network framework. The publication offers a neural network formalization consisting of a topological taxonomy, a uniform nomenclature, and an accompanying consistent mnemonic notation. Supported by this formalization, both a flexible hierarchical and a universal mathematical definition are presented.
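
    As an illustration only, a layered definition of this kind could take the following shape; the notation (layer tuple, weight matrices, activation f) is a hypothetical sketch and is not quoted from the publication:

        % Hypothetical sketch of a layered network definition.
        \[
          \mathcal{N} \;=\; \bigl(\, (L_1, \dots, L_k),\; (W_1, \dots, W_{k-1}),\; f \,\bigr)
        \]
        % where layer L_i consists of n_i neurons, W_i is the n_{i+1} x n_i
        % interlayer weight matrix, and the network maps an input x_1 to the
        % output x_k via x_{i+1} = f(W_i x_i).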

    Optimal Setting of Weights, Learning Rate, and Gain

    The optimal settings of the initial weights, the learning rate, and the gain of the activation function, key parameters of a neural network that influence training time and generalization performance, are investigated by means of a large number of experiments on ten benchmarks using high order perceptrons. The results illustrate the influence of these key parameters on training time and generalization performance, and permit general conclusions to be drawn on the behavior of high order perceptrons, some of which can be extended to multilayer perceptrons. Furthermore, optimal values for the learning rate and the gain of the activation function are found and compared to those recommended by existing heuristics.
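
    A minimal sketch of how these three parameters enter the training of a perceptron is given below; the function names, the squared-error loss, and all default values are illustrative assumptions, not the publication's experimental setup:

        import numpy as np

        def sigmoid(x, gain=1.0):
            # Logistic activation with an explicit gain: f(x) = 1 / (1 + exp(-gain * x)).
            return 1.0 / (1.0 + np.exp(-gain * x))

        def train(X, y, lr=0.5, gain=1.0, init_range=0.1, epochs=100, seed=0):
            # Gradient descent on squared error for a single-layer perceptron.
            # lr (learning rate), gain, and init_range (initial weight range) are
            # the three key parameters studied in the abstract; high order terms
            # (e.g. pairwise input products) can be appended as extra columns of
            # X to obtain a high order perceptron.
            rng = np.random.default_rng(seed)
            w = rng.uniform(-init_range, init_range, X.shape[1])
            for _ in range(epochs):
                out = sigmoid(X @ w, gain)
                delta = (out - y) * gain * out * (1.0 - out)  # dE/dnet per sample
                w -= lr * (X.T @ delta) / len(y)
            return w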

    Neural Network Adaptations to Hardware Implementations

    To take advantage of the massive parallelism offered by artificial neural networks, hardware implementations are essential. However, most standard neural network models are not well suited to hardware implementation, and adaptations are needed. This section gives an overview of the issues encountered when mapping an ideal neural network model onto a compact and reliable neural network hardware implementation, such as quantization, the handling of nonuniformities and nonideal responses, and the restraining of computational complexity. Furthermore, a broad range of hardware-friendly learning rules is presented, which allow for simpler and more reliable hardware implementations. The relevance of these neural network adaptations to hardware is illustrated by their application in existing hardware implementations.
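
    As a minimal sketch of one such adaptation, the snippet below shows symmetric uniform weight quantization to a fixed bit width; the bit width, clipping range, and function name are illustrative assumptions rather than a scheme taken from the section:

        import numpy as np

        def quantize(w, bits=8, w_max=1.0):
            # Clip weights to the realizable range [-w_max, w_max] and snap
            # them to the evenly spaced levels representable with the given
            # bit width, as required by limited-precision hardware storage.
            q_max = 2 ** (bits - 1) - 1      # e.g. 127 for 8-bit weights
            step = w_max / q_max
            return np.round(np.clip(w, -w_max, w_max) / step) * step

        print(quantize(np.array([0.123, -1.4]), bits=4))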

    High Order and Multilayer Perceptron Initialization

    Proper initialization is one of the most important prerequisites for fast convergence of feed-forward neural networks such as high order and multilayer perceptrons. This publication aims at determining the optimal variance (or range) of the initial weights and biases, which is the principal parameter of random initialization methods for both types of neural networks. An overview of random weight initialization methods for multilayer perceptrons is presented. These methods are extensively tested on eight real-world benchmark data sets and a broad range of initial weight variances, by means of more than 30,000 simulations, with the aim of finding the best weight initialization method for multilayer perceptrons. For high order networks, a large number of experiments (more than 200,000 simulations) was performed, using three weight distributions, three activation functions, several network orders, and the same eight data sets. The results of these experiments are compared to weight initialization techniques for multilayer perceptrons, which leads to the proposal of a suitable initialization method for high order perceptrons. The conclusions on the initialization methods for both types of networks are justified by sufficiently small confidence intervals of the mean convergence times.
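
    A minimal sketch of the kind of variance-parameterized random initialization being compared; the helper and its distribution names are illustrative assumptions, not the methods tested in the publication:

        import numpy as np

        def init_weights(shape, variance=0.01, dist="uniform", seed=0):
            # Draw zero-mean initial weights with a prescribed variance, the
            # principal parameter of random initialization methods.
            rng = np.random.default_rng(seed)
            if dist == "uniform":
                a = np.sqrt(3.0 * variance)   # Var of U(-a, a) is a**2 / 3
                return rng.uniform(-a, a, shape)
            if dist == "normal":
                return rng.normal(0.0, np.sqrt(variance), shape)
            raise ValueError(f"unknown distribution: {dist}")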

    Adaptive Multilayer Optical Neural Network Design

    An adaptive multilayer dual-wavelength optical neural network design with all-optical forward propagation is described; it is based on a large number of modifiable optical interconnections and a special weight discretization algorithm that compensates for EM noise. The input and the interconnection weights are presented by liquid crystal television screens, and optical thresholding at the hidden layer is performed by a liquid crystal light valve.
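
    The abstract does not spell out the weight discretization algorithm itself; the sketch below only illustrates the generic step of snapping continuous weights to a small set of realizable optical transmission levels (all values are made up):

        import numpy as np

        def discretize(w, levels):
            # Replace each weight by the nearest value in 'levels', a sorted
            # 1-D array of transmission levels the optical hardware can realize.
            idx = np.argmin(np.abs(w[..., None] - levels), axis=-1)
            return levels[idx]

        allowed = np.linspace(-1.0, 1.0, 5)                  # five illustrative levels
        print(discretize(np.array([0.31, -0.77]), allowed))  # [ 0.5 -1. ]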

    The Interchangeability of Learning Rate and Gain in Backpropagation Neural Networks

    The backpropagation algorithm is widely used for training multilayer neural networks. In this publication, the gain of its activation function(s) is investigated. Specifically, it is proven that changing the gain of the activation function is equivalent to simultaneously changing the learning rate and the weights. This simplifies the backpropagation learning rule by eliminating one of its parameters. The theorem can be extended to hold for some well-known variations of the backpropagation algorithm, such as the use of a momentum term, flat spot elimination, or adaptive gain. Furthermore, it is successfully applied to compensate for the non-standard gain of optical sigmoids in optical neural networks.
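
    The abstract does not state the exact scaling, so the following is a sketch of one standard form of such an equivalence: a network with gain beta and learning rate lr follows the same trajectory as a gain-1 network whose weights are scaled by beta and whose learning rate is lr * beta**2. The single-unit check below is an illustrative assumption, not the publication's proof:

        import numpy as np

        def sigmoid(x, gain=1.0):
            return 1.0 / (1.0 + np.exp(-gain * x))

        def sgd_step(w, x, y, lr, gain):
            # One gradient-descent step on squared error for one sigmoid unit.
            out = sigmoid(x @ w, gain)
            return w - lr * (out - y) * gain * out * (1.0 - out) * x

        rng = np.random.default_rng(1)
        x, y = rng.normal(size=3), 0.7
        w, beta, lr = rng.normal(size=3), 2.5, 0.1

        w_a = sgd_step(w, x, y, lr, beta)                  # gain-beta network
        w_b = sgd_step(beta * w, x, y, lr * beta**2, 1.0)  # rescaled gain-1 network
        print(np.allclose(beta * w_a, w_b))                # True: identical behavior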

    Image Classification by Neural Networks for the Quality Control of Watches

    A method is presented for the automatic detection of the time displayed by watches, in which the hands are classified by a neural network. To reduce the overall cost of data collection, strict limits were imposed on the data collection time. This constraint severely limits the number of available images and poses the challenge of solving the hand recognition problem with a minimal amount of training and test data. Two neural network approaches are presented together with their performance results, which show an excellent final recognition rate.

    Modular Object-Oriented Neural Network Simulators and Topology Generalizations

    A growing number of neural networks are based on topologies that deviate from the standard fixed, first-order, fully interlayer-connected ones. Although a variety of neural network simulators currently exists, few are flexible enough to facilitate substantial topology alterations. Some novel modular object-oriented neural network simulators promise that modifications and extensions can be made with minimal effort. Two of these simulators are described and compared: OpenSimulator (version 3.1) and Sesame (version 4.5). An extension of these simulators to high-order and ontogenic neural networks is outlined.
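
    Neither simulator's API is reproduced here; the sketch below merely illustrates the modular object-oriented idea, in which topology changes amount to recomposing module objects (all class and method names are hypothetical):

        import numpy as np

        class Module:
            # Common interface: every building block maps a vector to a vector.
            def forward(self, x):
                raise NotImplementedError

        class Linear(Module):
            def __init__(self, n_in, n_out, seed=0):
                self.W = np.random.default_rng(seed).normal(0.0, 0.1, (n_out, n_in))
            def forward(self, x):
                return self.W @ x

        class Sigmoid(Module):
            def forward(self, x):
                return 1.0 / (1.0 + np.exp(-x))

        class Chain(Module):
            # Sequential composition; inserting, removing, or swapping modules
            # alters the topology without touching the simulator core.
            def __init__(self, *modules):
                self.modules = modules
            def forward(self, x):
                for m in self.modules:
                    x = m.forward(x)
                return x

        net = Chain(Linear(4, 3), Sigmoid(), Linear(3, 1), Sigmoid())
        print(net.forward(np.ones(4)))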